
    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years there have been many algorithmic advances in the modelling of semantic cognition, but a commonality across these cognitive models is their reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory networks), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them on real-world visual scenes drawn from naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec exhibits promisingly human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes can be reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and, concurrently, to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
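    To make the scene2vec construction concrete, here is a minimal sketch of scene-based concept vectors built from object co-occurrences. The toy scene annotations, the PPMI weighting, and the SVD step are illustrative assumptions, not necessarily the thesis's exact pipeline (which also folds in emotions and language-based tags).

```python
# Minimal sketch: concept vectors from object co-occurrences in scenes.
# The scene data and the PPMI + SVD pipeline are illustrative assumptions.
from itertools import combinations

import numpy as np

# Hypothetical scene annotations: each scene is the set of objects it contains.
scenes = [
    {"dog", "ball", "grass"},
    {"dog", "person", "leash"},
    {"person", "car", "road"},
]

vocab = sorted(set().union(*scenes))
index = {obj: i for i, obj in enumerate(vocab)}

# Count how often each pair of objects appears in the same scene.
counts = np.zeros((len(vocab), len(vocab)))
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

# Positive pointwise mutual information (PPMI) weighting.
total = counts.sum()
marginals = counts.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(counts * total / (marginals @ marginals.T))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Dense concept vectors via truncated SVD.
u, s, _ = np.linalg.svd(ppmi)
k = 2
vectors = u[:, :k] * s[:k]
print({obj: vectors[index[obj]].round(2) for obj in vocab})
```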
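    The abstract names targeted debiasing but not its mechanism. One common realisation, in the spirit of Bolukbasi et al. (2016), is to project a learned bias direction out of the concept vectors; the sketch below is a hypothetical illustration of that idea with invented vectors, not the thesis's procedure.

```python
# Hypothetical sketch: remove a bias direction from concept vectors by
# orthogonal projection. Not necessarily the thesis's debiasing method.
import numpy as np

def debias(vectors, bias_direction):
    """Strip the component of each row vector along a unit bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vectors - np.outer(vectors @ b, b)

# Invented example: a gender direction estimated from one concept pair,
# then projected out of some occupation vectors.
rng = np.random.default_rng(0)
man, woman = rng.normal(size=8), rng.normal(size=8)
occupations = rng.normal(size=(3, 8))  # e.g. nurse, engineer, teacher
bias = man - woman

neutral = debias(occupations, bias)
# After debiasing, the occupation vectors have ~zero component along the
# bias direction.
print(np.round(neutral @ (bias / np.linalg.norm(bias)), 6))
```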

    Extending symbol interdependency: perceptual scene vectors

    Louwerse (2011) advances the symbol interdependency hypothesis by empirically demonstrating the importance of statistical regularities in linguistic surface structures. Symbol interdependency posits that meaning extraction commonly attributed to embodied representations or algorithms should instead be attributed to language. In a series of seven computational experiments, we find that language surface structure best encodes meaning when the structural cues are sufficiently constrained by modeller-determined feature sets, with performance deteriorating for randomly selected language surface cues. We further find that Latent Semantic Analysis encodes meaning better as its weaker dimensions are removed. These findings collectively indicate that although language is important, increasing the relevance of linguistic statistical regularities is also critical. We introduce Perceptual Scene Vectors (PSVs), a novel approach that uses object co-occurrences from images to automatically extract strong associative and taxonomic relationships. This approach extracts these associations more successfully than the language-based approaches, measured both qualitatively and quantitatively via an original application of a cluster-correspondence metric. PSVs encode meaning without requiring modellers to hand-code relevant features, providing an ecologically valid approach to extending symbol interdependency beyond language and partially solving the relevance problem in semantics by grounding meaning extraction in real-world visual scenes.
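    The cluster-correspondence metric is not spelled out in the abstract. One standard way to score how well induced clusters line up with gold-standard categories is to match clusters to categories one-to-one with the Hungarian algorithm and report the resulting agreement; the sketch below is an illustrative reconstruction under that assumption.

```python
# Illustrative sketch of a cluster-correspondence style metric: predicted
# clusters are matched one-to-one to gold categories with the Hungarian
# algorithm, then agreement under the best matching is reported. The paper's
# exact formulation may differ.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cluster_correspondence(pred, gold):
    """Fraction of items whose predicted cluster maps onto their gold
    category under the best one-to-one cluster-to-category assignment."""
    pred, gold = np.asarray(pred), np.asarray(gold)
    # Contingency table: rows = predicted clusters, cols = gold categories.
    table = np.zeros((pred.max() + 1, gold.max() + 1), dtype=int)
    for p, g in zip(pred, gold):
        table[p, g] += 1
    # Hungarian matching maximises the total overlap of the assignment.
    rows, cols = linear_sum_assignment(-table)
    return table[rows, cols].sum() / len(pred)

# Toy example: cluster labels over PSVs vs. gold taxonomic categories.
pred = [0, 0, 1, 1, 2, 2]  # e.g. k-means labels
gold = [1, 1, 0, 0, 2, 2]  # e.g. animal / vehicle / food
print(cluster_correspondence(pred, gold))  # 1.0: perfect up to relabelling
```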